When artificial intelligence goes wrong

#artificialintelligence

Bengaluru: Last year, for the first time, an international beauty contest was judged by machines. Thousands of people from across the world submitted their photos to Beauty.AI, hoping that their faces would be selected by an advanced algorithm free of human biases, one that would, in the process, accurately define what constitutes human beauty. In preparation, the algorithm had studied hundreds of images from past beauty contests, training itself to recognize human beauty based on the winners. But what was supposed to be a breakthrough moment showcasing the potential of modern self-learning, artificially intelligent algorithms rapidly turned into an embarrassment for the creators of Beauty.AI, as the algorithm picked the winners largely on the basis of skin colour. "The algorithm made a fairly non-trivial correlation between skin colour and beauty. A classic example of bias creeping into an algorithm," says Nisheeth K. Vishnoi, an associate professor at the School of Computer and Communication Sciences at the Switzerland-based École Polytechnique Fédérale de Lausanne (EPFL).
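The failure Vishnoi describes is a classic case of a model learning a spurious correlation from biased training labels: if past winners skew toward one skin tone, a learner rewarded for reproducing those labels will latch onto skin tone rather than any genuine beauty cue. The following toy sketch illustrates the mechanism with synthetic data and invented features; it is purely hypothetical and is not Beauty.AI's actual model, data, or feature set.

```python
import random

random.seed(0)

# Hypothetical contestants with two invented features:
# skin_tone (0 = darker .. 1 = lighter) and symmetry (a stand-in for a
# genuine beauty cue). The historical labels are biased: past winners
# were almost exclusively light-skinned, regardless of symmetry.
def make_contestant():
    skin_tone = random.random()
    symmetry = random.random()
    won = 1 if skin_tone > 0.7 else 0  # biased historical label
    return (skin_tone, symmetry, won)

train = [make_contestant() for _ in range(1000)]

# A naive "learner": score each feature by the best accuracy a simple
# threshold rule on that feature alone achieves against the labels.
def best_threshold_accuracy(feature_index):
    best = 0.0
    for t in [i / 20 for i in range(21)]:
        correct = sum(
            1 for row in train if (row[feature_index] > t) == bool(row[2])
        )
        best = max(best, correct / len(train))
    return best

skin_acc = best_threshold_accuracy(0)
symmetry_acc = best_threshold_accuracy(1)

# The spurious feature explains the biased labels almost perfectly, so
# any model optimized on this history will judge chiefly by skin tone.
print(f"skin-tone rule accuracy: {skin_acc:.2f}")
print(f"symmetry rule accuracy:  {symmetry_acc:.2f}")
```

Because the labels themselves encode the bias, no amount of extra training data fixes this; the model is faithfully reproducing the pattern it was shown, which is exactly the point Vishnoi makes.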